Introduction to the Indian Buffet Process: Theory and Applications
Similar Resources
The Indian Buffet Process: An Introduction and Review
The Indian buffet process is a stochastic process defining a probability distribution over equivalence classes of sparse binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features, or that involve bipartite graphs in which the size ...
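The generative "buffet" metaphor behind this distribution can be sketched in a few lines: customer i samples each previously tried dish k with probability proportional to its popularity, then tries a Poisson-distributed number of new dishes. The helper name `sample_ibp` and its signature are illustrative, not from any of the papers listed here.

```python
import numpy as np

def sample_ibp(n_customers, alpha, seed=None):
    """Draw a sparse binary feature matrix Z from the Indian Buffet Process.

    A minimal sketch of the culinary metaphor: customer i takes each
    existing dish k with probability m_k / i (m_k = number of previous
    takers), then samples Poisson(alpha / i) brand-new dishes.
    """
    rng = np.random.default_rng(seed)
    dishes = []  # dishes[k] = list of customer indices who took dish k
    for i in range(1, n_customers + 1):
        # revisit existing dishes in proportion to their popularity
        for takers in dishes:
            if rng.random() < len(takers) / i:
                takers.append(i - 1)
        # open a Poisson(alpha / i) number of new dishes (columns)
        for _ in range(rng.poisson(alpha / i)):
            dishes.append([i - 1])
    # assemble the N x K binary matrix (K is random, unbounded a priori)
    Z = np.zeros((n_customers, len(dishes)), dtype=int)
    for k, takers in enumerate(dishes):
        Z[takers, k] = 1
    return Z
```

Each row is a customer (object) and each column a dish (latent feature); the number of columns grows with the data rather than being fixed in advance.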
Bayesian Statistics: Indian Buffet Process
A common goal of unsupervised learning is to discover the latent variables responsible for generating the observed properties of a set of objects. For example, factor analysis attempts to find a set of latent variables (or factors) that explain the correlations among the observed variables. A problem with factor analysis, however, is that the user has to specify the number of latent variables w...
Variational Inference for the Indian Buffet Process
The Indian Buffet Process (IBP) is a nonparametric prior for latent feature models in which observations are influenced by a combination of hidden features. For example, images may be composed of several objects and sounds may consist of several notes. Latent feature models seek to infer these unobserved features from a set of observations; the IBP provides a principled prior in situations wher...
Supplement to: Scaling the Indian Buffet Process via Submodular Maximization
Here we discuss the "shifted" equivalence class of binary matrices first proposed by Ding et al. (2010). For a given N × K binary matrix Z, the equivalence class [Z] is obtained by shifting all-zero columns to the right of the non-zero columns while maintaining the non-zero column orderings; see Figure 1. Placing independent Beta(α/K, 1) priors on the Bernoulli entries of...
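The shifting operation described above is mechanical: keep the non-zero columns in their original relative order and append the all-zero columns afterward. A small sketch (the helper name `shift_zero_columns` is illustrative, not from the supplement):

```python
import numpy as np

def shift_zero_columns(Z):
    # Map Z to its "shifted" equivalence-class representative:
    # all-zero columns move to the right while the relative order
    # of the non-zero columns is preserved (after Ding et al., 2010).
    has_ones = Z.any(axis=0)
    return np.hstack([Z[:, has_ones], Z[:, ~has_ones]])
```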
Indian Buffet Process Dictionary Learning: Algorithms
Ill-posed inverse problems call for some prior model to define a suitable set of solutions. A wide family of approaches relies on the use of sparse representations. Dictionary learning makes it possible to learn a redundant set of atoms that represent the data sparsely. Various approaches have been proposed, mostly based on optimization methods. We propose a Bayesian nonparametric appr...
Journal
Journal title: Korean Journal of Applied Statistics
Year: 2015
ISSN: 1225-066X
DOI: 10.5351/kjas.2015.28.2.251